    The Effect of Different Balance Training Protocols on Athletes with Chronic Ankle Instability: A Systematic Review

    Chronic Ankle Instability (CAI) is a prevalent topic in sports physical therapy due to the high incidence of ankle sprains among athletes. Differing opinions on methods of treatment for CAI have generated an influx of research on the effectiveness of different treatment protocols. Unstable-surface balance training, plyometric/hop-stabilization training, resistance training combined with balance work, and combined interventions are currently being studied to determine their effects on athletes with CAI.

    A Retrospective Analysis of User Exposure to (Illicit) Cryptocurrency Mining on the Web

    In late 2017, a sudden proliferation of malicious JavaScript was reported on the Web: browser-based mining exploited the CPU time of website visitors to mine the cryptocurrency Monero. Several studies measured the deployment of such code and developed defenses. However, previous work did not establish how many users were actually exposed to the identified mining sites, or whether common user browsing behavior implied a real risk. In this paper, we present a retrospective analysis to close this research gap. We pool large-scale, longitudinal data from several vantage points, gathered during the prime time of illicit cryptomining, to measure the impact on web users. We leverage data from passive traffic monitoring of university networks and a large European ISP, with suspected mining sites identified in previous active scans. We corroborate our results with data from a browser extension with a large user base that tracks site visits. We also monitor open HTTP proxies and the Tor network for malicious injection of code. We find that the risk for most Web users was always very low, much lower than what deployment scans suggested, and any exposure period was very brief. However, we also identify a previously unknown and exploited attack vector on mobile devices.
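
    As a concrete illustration of the exposure-matching step described above, the sketch below cross-references a per-user visit log against a list of suspected mining domains to estimate how many users were ever exposed and for how long. The file names, log format, and statistics are hypothetical placeholders, not the paper's actual pipeline.

```python
# Minimal sketch: given domains flagged as mining sites by active scans and a
# log of per-user site visits (as from passive monitoring or a browser
# extension), estimate exposure counts and durations. Input formats assumed.
import csv
from collections import defaultdict

def load_mining_domains(path):
    """One suspected mining domain per line."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def exposure_stats(visit_log_path, mining_domains):
    """visit_log: CSV rows of (user_id, domain, timestamp, duration_seconds)."""
    exposed_time = defaultdict(float)  # user -> total seconds on mining sites
    all_users = set()
    with open(visit_log_path) as f:
        for user, domain, _ts, dur in csv.reader(f):
            all_users.add(user)
            if domain.lower() in mining_domains:
                exposed_time[user] += float(dur)
    exposed = len(exposed_time)
    rate = exposed / len(all_users) if all_users else 0.0
    return exposed, rate, exposed_time

if __name__ == "__main__":
    domains = load_mining_domains("suspected_mining_domains.txt")  # hypothetical file
    exposed, rate, per_user = exposure_stats("visits.csv", domains)  # hypothetical file
    print(f"{exposed} users exposed ({rate:.2%} of observed users)")
```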

    Assessing Cyberbiosecurity Vulnerabilities and Infrastructure Resilience

    The convergence of advances in biotechnology with laboratory automation, access to data, and computational biology has democratized biotechnology and accelerated the development of new therapeutics. However, increased access to biotechnology in the digital age has also introduced additional security concerns and, ultimately, spawned the new discipline of cyberbiosecurity, which encompasses cybersecurity, cyber-physical security, and biosecurity considerations. With the emergence of this new discipline comes the need for a logical, repeatable, and shared approach for evaluating facility and system vulnerabilities to cyberbiosecurity threats. In this paper, we outline the foundation of an assessment framework for cyberbiosecurity, accounting for both security and resilience factors in the physical and cyber domains. This is a unique problem set, but despite the operational and governance complexity of the cyberbiosecurity field, previous experience developing and implementing physical and cyber assessments across a wide spectrum of critical infrastructure sectors provides a validated point of departure for a cyberbiosecurity assessment framework. Our approach integrates existing capabilities and proven methodologies from the infrastructure assessment realm (e.g., decision science, physical security, infrastructure resilience, cybersecurity) with new expertise and requirements in the cyberbiosecurity space (e.g., biotechnology, biomanufacturing, genomics) to forge a flexible and defensible approach to identifying and mitigating vulnerabilities. Determining where vulnerabilities reside within cyberbiosecurity business processes can help public and private sector partners create an assessment framework that identifies economically and practically viable mitigation options and, ultimately, allows them to manage risk more effectively.

    Influence of diurnal variation in mesophyll conductance on modelled 13C discrimination: results from a field study

    Mesophyll conductance to CO2 (gm) limits carbon assimilation and influences carbon isotope discrimination (Δ) under most environmental conditions. Current work is elucidating the environmental regulation of gm, but the influence of gm on model predictions of Δ remains poorly understood. In this study, field measurements of Δ and gm were obtained using a tunable diode laser spectroscope coupled to portable photosynthesis systems. These data were used to test the importance of gm in predicting Δ with the comprehensive Farquhar model of Δ (Δcomp), in which gm was parameterized using three methods, based on: (i) mean gm; (ii) the relationship between stomatal conductance (gs) and gm; and (iii) the relationship between time of day (TOD) and gm. Incorporating mean gm, gs-based gm, or TOD-based gm did not consistently improve Δcomp predictions of Δ in field-grown juniper compared with the simple model (Δsimple), which omits the fractionation factors associated with gm and decarboxylation. Sensitivity tests suggest that b, the fractionation due to carboxylation, was lower (25‰) than the value commonly used in Δcomp (29‰) and Δsimple (27‰). These results demonstrate the limits of all tested models in predicting observed juniper Δ, largely due to unexplained offsets between predicted and observed values that were not reconciled by sensitivity tests of variability in gm, b, or e, the day respiratory fractionation.
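
    For reference, the two models compared here take the following standard forms (a common formulation after Farquhar and co-workers; the paper's exact parameterization may differ). Here a ≈ 4.4‰ is the fractionation due to diffusion in air, a_m ≈ 1.8‰ the combined dissolution and liquid-phase diffusion fractionation, e and f the fractionations during day respiration and photorespiration, R_d the day respiration rate, k the carboxylation efficiency, Γ* the CO2 photocompensation point, and A net assimilation; gm enters Δcomp through the chloroplast CO2 mole fraction C_c:

```latex
% Simple model: only diffusion (a) and carboxylation (b) fractionations.
\Delta_{\mathrm{simple}} = a + (b - a)\,\frac{C_i}{C_a}

% Comprehensive model: adds mesophyll, respiratory, and photorespiratory terms;
% g_m enters through the chloroplast CO2 mole fraction C_c.
\Delta_{\mathrm{comp}} = a\,\frac{C_a - C_i}{C_a} + a_m\,\frac{C_i - C_c}{C_a}
  + b\,\frac{C_c}{C_a} - \frac{e\,R_d/k + f\,\Gamma^{*}}{C_a},
\qquad C_c = C_i - \frac{A}{g_m}
```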

    The Long-Baseline Neutrino Experiment: Exploring Fundamental Symmetries of the Universe

    The preponderance of matter over antimatter in the early Universe, the dynamics of the supernova bursts that produced the heavy elements necessary for life, and whether protons eventually decay: these mysteries at the forefront of particle physics and astrophysics are key to understanding the early evolution of our Universe, its current state, and its eventual fate. The Long-Baseline Neutrino Experiment (LBNE) represents an extensively developed plan for a world-class experiment dedicated to addressing these questions. LBNE is conceived around three central components: (1) a new, high-intensity neutrino source generated by a megawatt-class proton accelerator at Fermi National Accelerator Laboratory, (2) a near neutrino detector just downstream of the source, and (3) a massive liquid argon time-projection chamber deployed as a far detector deep underground at the Sanford Underground Research Facility. This facility, located at the site of the former Homestake Mine in Lead, South Dakota, is approximately 1,300 km from the neutrino source at Fermilab, a distance (baseline) that delivers optimal sensitivity to neutrino charge-parity symmetry violation and mass ordering effects. This ambitious yet cost-effective design incorporates scalability and flexibility and can accommodate a variety of upgrades and contributions. With its exceptional combination of experimental configuration, technical capabilities, and potential for transformative discoveries, LBNE promises to be a vital facility for the field of particle physics worldwide, providing physicists from around the globe with opportunities to collaborate in a twenty- to thirty-year program of exciting science. In this document we provide a comprehensive overview of LBNE's scientific objectives, its place in the landscape of neutrino physics worldwide, the technologies it will incorporate, and the capabilities it will possess.

    A New Method for Measuring Metallicities of Young Super Star Clusters

    We demonstrate how the metallicities of young super star clusters (SSCs) can be measured using novel spectroscopic techniques in the J-band. The near-infrared flux of SSCs older than ~6 Myr is dominated by tens to hundreds of red supergiant stars. Our technique is designed to harness the integrated light of that population and produces accurate metallicities for new observations in galaxies above (M83) and below (NGC 6946) solar metallicity. In M83 we find [Z] = +0.28 ± 0.14 dex using a moderate-resolution (R ~ 3500) J-band spectrum, and in NGC 6946 we report [Z] = -0.32 ± 0.20 dex from a low-resolution spectrum of R ~ 1800. Recently commissioned low-resolution multiplexed spectrographs on the Very Large Telescope (KMOS) and Keck (MOSFIRE) will allow accurate measurements of SSC metallicities across the disks of star-forming galaxies out to distances of 70 Mpc with single-night observation campaigns using the method presented in this paper.
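
    The core of a technique like this is fitting the integrated, continuum-normalized spectrum against a grid of model spectra of varying metallicity. The sketch below shows a minimal chi-square version of that idea; the toy grid, noise level, and uncertainty estimate are illustrative assumptions, not the authors' actual pipeline (which also constrains parameters such as Teff, log g, and microturbulence).

```python
# Minimal sketch: pick the metallicity whose synthetic spectrum best matches an
# observed, continuum-normalized J-band spectrum by chi-square. Grid is a toy.
import numpy as np

def best_fit_metallicity(obs_flux, obs_err, grid_z, grid_flux):
    """grid_z: array of model metallicities [Z] (dex).
    grid_flux: (n_models, n_pixels) synthetic spectra on the observed grid."""
    chi2 = np.sum(((grid_flux - obs_flux) / obs_err) ** 2, axis=1)
    i = np.argmin(chi2)
    # Rough 1-sigma range from the delta-chi2 = 1 criterion (one parameter).
    ok = chi2 <= chi2[i] + 1.0
    return grid_z[i], (grid_z[ok].min(), grid_z[ok].max())

# Toy usage with a fake grid in which lines deepen with metallicity.
z_grid = np.linspace(-1.0, 0.5, 31)
pixels = 200
lines = np.abs(np.sin(np.linspace(0, 20, pixels)))   # fake line pattern
models = 1.0 - np.outer(z_grid + 1.2, lines) * 0.01  # (31, 200) spectra
obs = 1.0 - (0.25 + 1.2) * lines * 0.01              # "observed" at [Z]=+0.25
obs += np.random.default_rng(0).normal(0, 0.002, pixels)
zbest, (zlo, zhi) = best_fit_metallicity(obs, 0.002, z_grid, models)
print(f"[Z] = {zbest:+.2f} dex (range {zlo:+.2f} to {zhi:+.2f})")
```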

    Red Supergiants as Cosmic Abundance Probes: massive star clusters in M83, and the mass-metallicity relation of nearby galaxies

    We present an abundance analysis of seven super star clusters in the disk of M83. The near-infrared spectra of these clusters are dominated by red supergiants, and the spectral similarity in the J-band of such stars at uniform metallicity means that the integrated light from the clusters may be analysed using the same tools as those applied to single stars. Using data from VLT/KMOS we estimate metallicities for each cluster in the sample. We find that the abundance gradient in the inner regions of M83 is flat, with a central metallicity of [Z] = 0.21 ± 0.11 relative to a Solar value of Z⊙ = 0.014, which is in excellent agreement with the results from an analysis of luminous hot stars in the same regions. Combining this latest study with our other recent work, we construct a mass-metallicity relation for nearby galaxies based entirely on the analysis of RSGs. We find excellent agreement with the other stellar-based technique, that of blue supergiants, as well as with temperature-sensitive ('auroral' or 'direct') H II-region studies. Of all the H II-region strong-line calibrations, those which are empirically calibrated to direct-method studies (N2 and O3N2) provide the most consistent results.
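
    For context, the N2 and O3N2 indices mentioned above are standard strong-line metallicity diagnostics. Their definitions, together with the widely used linear calibrations of Pettini & Pagel (2004), are shown below; the direct-method recalibrations the abstract refers to (e.g., Marino et al. 2013) keep the same indices but refit the coefficients.

```latex
% Index definitions (emission-line flux ratios):
\mathrm{N2} \equiv \log\!\left(\frac{[\mathrm{N\,II}]\,\lambda 6584}{\mathrm{H}\alpha}\right),
\qquad
\mathrm{O3N2} \equiv \log\!\left(\frac{[\mathrm{O\,III}]\,\lambda 5007/\mathrm{H}\beta}
                                     {[\mathrm{N\,II}]\,\lambda 6584/\mathrm{H}\alpha}\right)

% Pettini & Pagel (2004) linear calibrations:
12 + \log(\mathrm{O/H}) = 8.90 + 0.57\,\mathrm{N2},
\qquad
12 + \log(\mathrm{O/H}) = 8.73 - 0.32\,\mathrm{O3N2}
```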

    The History and Prehistory of Natural-Language Semantics

    Contemporary natural-language semantics began with the assumption that the meaning of a sentence could be modeled by a single truth condition, or by an entity with a truth condition. But with the recent explosion of dynamic semantics and pragmatics and of work on non-truth-conditional dimensions of linguistic meaning, we are now in the midst of a shift away from a truth-condition-centric view and toward the idea that a sentence's meaning must be spelled out in terms of its various roles in conversation. This communicative turn in semantics raises historical questions: Why was truth-conditional semantics dominant in the first place, and why were the phenomena now driving the communicative turn initially ignored or misunderstood by truth-conditional semanticists? I offer a historical answer to both questions. The history of natural-language semantics, springing from the work of Donald Davidson and Richard Montague, began with a methodological toolkit that Frege, Tarski, Carnap, and others had created to better understand artificial languages. For them, the study of linguistic meaning was subservient to other explanatory goals in logic, philosophy, and the foundations of mathematics, and this subservience was reflected in the fact that they idealized away from all aspects of meaning that get in the way of a one-to-one correspondence between sentences and truth-conditions. The truth-conditional beginnings of natural-language semantics are best explained by the fact that, upon turning their attention to the empirical study of natural language, Davidson and Montague adopted the methodological toolkit assembled by Frege, Tarski, and Carnap and, along with it, their idealization away from non-truth-conditional semantic phenomena. But this pivot in explanatory priorities toward natural language itself rendered the adoption of the truth-conditional idealization inappropriate. Lifting the truth-conditional idealization has forced semanticists to upend the conception of linguistic meaning that was originally embodied in their methodology.